A taxonomy of epistemic injustice in the context of AI and the case for generative hermeneutical erasure
Epistemic injustice related to AI is a growing concern. In relation to machine learning models, epistemic injustice can have a diverse range of sources, from epistemic opacity, the discriminatory automation of testimonial prejudice, and the distortion of human beliefs via generative AI's hallucinations to the exclusion of the global South from global AI governance, the execution of bureaucratic violence via algorithmic systems, and interactions with conversational artificial agents. Based on a proposed general taxonomy of epistemic injustice, this paper first sketches a taxonomy of the types of epistemic injustice in the context of AI, relying on the work of scholars from the fields of philosophy of technology, political philosophy, and social epistemology. Secondly, an additional conceptualization of epistemic injustice in the context of AI is provided: generative hermeneutical erasure. I argue that this injustice amounts to the automation of 'epistemicide': the injustice done to epistemic agents in their capacity for collective sense-making through LLMs' suppression of difference in epistemology and conceptualization. AI systems' 'view from nowhere' epistemically inferiorizes non-Western epistemologies and thereby contributes to the erosion of their epistemic particulars, gradually contributing to hermeneutical erasure. This work's relevance lies in its proposal of a taxonomy that allows epistemic injustices to be mapped in the AI domain and in its proposal of a novel form of AI-related epistemic injustice.
See Me and Believe Me: Causality and Intersectionality in Testimonial Injustice in Healthcare
Andrews, Kenya S., Ohannessian, Mesrob I., Zheleva, Elena
In medical settings, it is critical that all who are in need of care are correctly heard and understood. When this is not the case due to prejudices a listener has, the speaker is experiencing 'testimonial injustice', which, building upon recent work, we quantify by the presence of several categories of unjust vocabulary in medical notes. In this paper, we use FCI, a causal discovery method, to study the degree to which certain demographic features (e.g., age, gender, and race) could lead to marginalization by way of contributing to testimonial injustice. To achieve this, we review physicians' notes for each patient, where we identify occurrences of unjust vocabulary, along with the demographic features present, and use causal discovery to build a Structural Causal Model (SCM) relating those demographic features to testimonial injustice. We analyze and discuss the resulting SCMs to show the interaction of these factors and how they influence the experience of injustice. Despite the potential presence of some confounding variables, we observe how one contributing feature can make a person more prone to experiencing another contributor of testimonial injustice. There is no single root of injustice, and thus intersectionality cannot be ignored. These results call for considering more than singular or equalized attributes of who a person is when analyzing and improving their experiences of bias and injustice. This work is thus a first foray into using causal discovery to understand the nuanced experiences of patients in medical settings, and its insights could be used to guide design principles throughout healthcare, to build trust and promote better patient care.
Epistemic Injustice in Generative AI
Kay, Jackie, Kasirzadeh, Atoosa, Mohamed, Shakir
While algorithms have traditionally been leveraged to present and organize human-generated content, the advent of generative AI has started to fundamentally shift this paradigm. Generative AI models can now create content - spanning text, imagery, and beyond - that resembles that of authors, journalists, painters, or photographers. In this paper, we take generative AI to be the class of machine learning models trained on massive amounts of data, typically media such as text, images, audio or video, in order to produce representative instances of such media (García-Peñalvo and Vázquez-Ingelmo 2023). While traditional discussions of epistemic injustice have primarily centered on interpersonal human interactions (McKinnon 2017; Tsosie 2012), existing research on algorithmic epistemic injustice has largely been limited to epistemic injustices produced by decision-making and classification algorithms. However, we argue that the distinctive characteristics of generative AI give rise to novel forms of epistemic injustice that necessitate a dedicated analytical framework. To address this, we expand upon the established philosophical discourse on epistemic injustice and introduce an account of "generative algorithmic epistemic injustice."